† Corresponding author. E-mail:
Project supported by the National Key R&D Program of China (Grant Nos. 2017YFA0303604 and 2019YFA0308500), the Youth Innovation Promotion Association of Chinese Academy of Sciences (Grant No. 2018008), the National Natural Science Foundation of China (Grant Nos. 11674385, 11404380, 11721404, and 11874412), and the Key Research Program of Frontier Sciences of Chinese Academy of Sciences (Grant No. QYZDJSSW-SLH020).
The further development of traditional von Neumann-architecture computers is limited by the breakdown of Moore's law and the von Neumann bottleneck, which make them unsuitable for future high-performance artificial intelligence (AI) systems. Therefore, new computing paradigms are urgently needed. Inspired by the human brain, neuromorphic computing has been proposed to realize AI while reducing power consumption. As one of the basic hardware units for neuromorphic computing, artificial synapses have recently attracted worldwide research interest. Among the various electronic devices that mimic biological synapses, synaptic transistors show promising properties, such as the ability to perform signal transmission and learning simultaneously, enabling dynamic spatiotemporal information processing. In this article, we review recent advances in electrolyte- and ferroelectric-gated synaptic transistors. Their structures, materials, working mechanisms, advantages, and disadvantages are presented. In addition, the challenges of developing advanced synaptic transistors are discussed.
Traditional digital computers based on the von Neumann architecture have matured over the past few decades and have greatly promoted the development of industry, science, and technology. They are ideal for solving structured problems, e.g., explicit mathematical problems, or for processing precisely defined data sets.[1,2] Nevertheless, two main obstacles hinder their further improvement: 1) Moore's law-based device scaling and the accompanying technology development have slowed significantly; 2) the physical separation of memory and data processing units raises the cost of "big data" movement, known as the von Neumann bottleneck.[3] These issues leave von Neumann computers no match for the human brain when dealing with the uncertainty, ambiguity, and contradiction of the natural world.
Compared to von Neumann computers, the human brain works in a completely different fashion: 1) it is massively parallel and extremely compact; 2) it is power efficient; 3) it is fault-tolerant and robust; 4) it combines storage and computation; 5) it is self-learning and adaptive to changing environments.[1] The human brain consists of ∼10¹¹ neurons connected by ∼10¹⁵ synapses, which control motion, thinking, learning, and memory.[4] As the basic units of the human nervous system, synapses regulate the strengths of neuron connections and simultaneously store and process information. When neuron spikes arrive at a synapse, the connection strength, i.e., the synaptic weight, is changed in a specific way.[5,6] This adaptability, also called synaptic plasticity, is considered to be the main principle underlying learning and memory in the brain.[7–9] Although a neuron or synapse operates much more slowly than a complementary metal-oxide-semiconductor (CMOS) transistor, the brain's parallel processing lets it outperform von Neumann computers on tasks such as video or voice recognition and image analysis, with ultralow power consumption (∼20 W) and a small volume (∼1200 cm³).[10–13] Given this impressive computational performance, a stream of research on "brain-inspired computing" (neuromorphic computing) has recently emerged.[14–21]
Currently, great efforts are being devoted to implementing neuromorphic computing. One approach is emulation at the software level using CMOS integrated circuits. A recent example is Intel's Loihi chip, announced in 2017, which has 128 neuromorphic cores, each containing 1024 primitive spiking neural units grouped into tree-like structures. The Loihi chip can implement not only inference but also self-learning using spiking neural networks (SNNs).[22] However, such CMOS-based approaches often require tens of transistors to emulate a single neuron or synapse; the Loihi chip uses 2.07 billion transistors to mimic around 131 thousand neurons and 130 million synapses. They are energy intensive and limited in further scalability by the slowdown in CMOS scaling. Another approach is hardware implementation based on artificial neuromorphic devices. Over the past decades, tremendous efforts have been devoted to implementing artificial neurons and artificial synapses using a variety of emerging materials and devices.[1,23,24] Herein we mainly focus on the latter, artificial synapses. Initially, two-terminal memristors, such as resistive random access memory (RRAM),[25–28] phase-change memory (PCM),[29] ferroelectric random access memory (FRAM),[30] and magnetic random access memory (MRAM),[31,32] were investigated extensively as artificial synapses. These devices enabled several important advances in image recognition and data classification.[33,34] Recently, three-terminal synaptic transistors have been presented as a more advantageous alternative to two-terminal memristors owing to their good stability, relatively controllable testing parameters, clear operation mechanism, and the variety of materials from which they can be constructed.[35,36] A typical synaptic transistor has a structure shown in Fig.
In this topical review, recent advances in high-performance synaptic transistors are discussed. The article is structured as follows. In Section
In biological nervous systems, there are two fundamental types of synapses: electrical and chemical. Electrical synapses pass ionic current directly and are found mainly in lower animals such as crustaceans and fish, while chemical synapses constitute the majority of synapses in the human brain.[35] Thus, we will mainly focus on the latter. Figure
Short-term plasticity occurs on a timescale of milliseconds to seconds and can be categorized as short-term potentiation (STP) and short-term depression (STD).[5,48] STP/STD refers to the increase/decrease of synaptic transmission in a short-term mode, including paired-pulse facilitation/depression (PPF/PPD), post-tetanic potentiation (PTP), etc. PPF is a typical phenomenon in which the EPSC evoked by a pulse is increased when a second pulse follows closely.[48,49] The PPF ratio can be defined as PPF = (A2/A1) × 100%, where A1 and A2 are the EPSC peaks of the first and second pulses, respectively. This ratio decreases as the time interval between the pair of pulses increases and can be approximated by a double decay function:[5] PPF = C1 exp(−t/τ1) + C2 exp(−t/τ2) + 1, where t is the interval between the two pulses, C1 and C2 are the initial facilitation magnitudes, and τ1 and τ2 are the characteristic relaxation times of the rapid and slow phases, respectively. Conversely, PPD describes a phenomenon in which the EPSC evoked by the second pulse is smaller than that evoked by the first, and its ratio increases with the time interval between the two pulses.[50–52] PTP is a transient enhancement of synaptic weight caused by intense consecutive synaptic activity in a short period of time;[48,53] in a sense, it is an extension of PPF. Figure
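As a numerical illustration, the double-decay expression above can be evaluated directly; the parameter values in the following sketch (C1, C2, τ1, τ2) are illustrative, not fitted to any specific device:

```python
import math

def ppf_ratio(t, c1=1.2, c2=0.4, tau1=30.0, tau2=300.0):
    """Double-decay PPF model: PPF = C1*exp(-t/tau1) + C2*exp(-t/tau2) + 1.

    t is the inter-pulse interval (ms). C1 and C2 are the initial
    facilitation magnitudes of the rapid and slow phases, and tau1,
    tau2 their relaxation times; all values here are illustrative.
    A return value of 1.0 means no facilitation (A2 = A1).
    """
    return c1 * math.exp(-t / tau1) + c2 * math.exp(-t / tau2) + 1.0

# Facilitation is strongest for short intervals and decays toward 1
curve = {dt: round(ppf_ratio(dt), 3) for dt in (10, 50, 200, 1000)}
```

Fitting measured A2/A1 data to this form yields the two relaxation times that characterize the rapid and slow phases of the facilitation decay.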
Long-term plasticity can last for hours or longer and is categorized into long-term potentiation (LTP) and long-term depression (LTD), in which the synaptic strength is persistently enhanced or diminished, respectively, through repeated stimulation.[55,56] LTP is believed to be key to the brain's memory formation.[57,58] In synaptic transistors, LTP/LTD can be implemented through electrochemical doping/de-doping of the channel layer or through electric-field control via ferroelectric switching.
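In device terms, repeated potentiation and depression pulses trace out a conductance-update curve. A widely used phenomenological sketch of this behavior is an exponential nonlinearity, in which each pulse produces a conductance step that shrinks as the conductance approaches its bound; the parameters below are illustrative, not taken from any reported device:

```python
import math

def potentiate(g, g_min=0.0, g_max=1.0, alpha=0.1, beta=3.0):
    """One potentiation pulse: the conductance step shrinks
    exponentially as g approaches g_max (illustrative parameters)."""
    return g + alpha * math.exp(-beta * (g - g_min) / (g_max - g_min))

def depress(g, g_min=0.0, g_max=1.0, alpha=0.1, beta=3.0):
    """One depression pulse: the step shrinks as g approaches g_min."""
    return g - alpha * math.exp(-beta * (g_max - g) / (g_max - g_min))

g = 0.0
trace = [g]
for _ in range(50):                 # 50 potentiation pulses (LTP branch)
    g = min(potentiate(g), 1.0)
    trace.append(g)
for _ in range(50):                 # 50 depression pulses (LTD branch)
    g = max(depress(g), 0.0)
    trace.append(g)
```

The asymmetry and saturation of such curves directly affect training accuracy when arrays of these devices are used for neuromorphic computing, which is why linear and symmetric updates are a common design target.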
Learning is the most important function of the biological brain; thus, revealing the rules and theories of learning is of great significance for achieving neuromorphic computing. As early as the 1940s, Hebb proposed a postulate of synaptic modification through correlated activities, concisely described as "neurons that fire together wire together".[59] After decades of development, the Hebbian learning rule is mainly reflected in two kinds of plasticity: spike-timing-dependent plasticity (STDP) and spike-rate-dependent plasticity (SRDP).[2,3] STDP states that the change of synaptic weight is a function of the time difference between post-synaptic and pre-synaptic spikes.[7,60,61] It can be symmetric or asymmetric, as shown in Fig.
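The asymmetric STDP window is commonly modeled with two exponentials, one for each sign of the spike-timing difference; the amplitudes and time constants in this sketch are illustrative, not from any specific measurement:

```python
import math

def stdp_dw(delta_t, a_plus=1.0, a_minus=0.5, tau_plus=20.0, tau_minus=20.0):
    """Asymmetric STDP window: weight change as a function of
    delta_t = t_post - t_pre (ms). Pre-before-post (delta_t > 0)
    potentiates; post-before-pre (delta_t < 0) depresses. The
    amplitudes and time constants are illustrative assumptions.
    """
    if delta_t > 0:
        return a_plus * math.exp(-delta_t / tau_plus)
    elif delta_t < 0:
        return -a_minus * math.exp(delta_t / tau_minus)
    return 0.0
```

A symmetric window would instead make the weight change depend only on |Δt|, which is the other variant commonly reported for synaptic transistors.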
It can be said with certainty that synaptic plasticity is the biological basis of our brain's learning and memory functions. Admittedly, our knowledge of the brain's mechanisms is still quite limited. Even so, we can attempt to realize neuromorphic computing by simulating the structure and function of biological neural networks and synapses. In the following parts, the mechanisms and materials of different high-performance synaptic transistors will be reviewed.
In general, electrolytes are insulators for electrons and holes but good conductors of ions. EGTs use the ions in the electrolyte dielectric layer to effectively regulate the conductance of the channel. They mainly work in two ways. In the first, the functional ions in the electrolyte move under the applied gate electric field and accumulate at the interface between the semiconductor and the dielectric, forming an electric double layer with high capacitance. In the second, an electric double layer is also formed, but as the gate field intensifies, functional ions in the electrolyte are inserted into the channel material to further regulate the channel conductance in a non-volatile manner. Since it can simulate both short-term and long-term synaptic plasticity, the latter mechanism has become an important research topic. The functional ions in electrolytes are varied and can be cations or anions, such as H+,[54,65–70] Li+,[19,71–73] O2–,[74–77] etc. In this section, we present EGTs utilizing different functional ions.
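The two regulation mechanisms can be caricatured in a toy model: each gate pulse charges a volatile electric-double-layer (EDL) component that relaxes on its own, while pulses above an (assumed) insertion threshold additionally add a non-volatile doping component. All parameters here are illustrative:

```python
import math

def simulate_egt(pulses, dt=1.0, tau_edl=5.0, k_edl=1.0,
                 k_dope=0.05, threshold=0.8):
    """Toy model of an electrolyte-gated synaptic transistor.

    Each entry of `pulses` is a gate-pulse amplitude (arbitrary units)
    applied during one time step dt. The volatile EDL component decays
    with time constant tau_edl (short-term plasticity); pulses above
    `threshold` also drive ion insertion, adding a small non-volatile
    doping component (long-term plasticity). Parameters are assumed,
    purely for illustration. Returns the conductance trace per step.
    """
    edl, doped = 0.0, 0.0
    trace = []
    for v in pulses:
        edl *= math.exp(-dt / tau_edl)   # volatile EDL part relaxes
        edl += k_edl * v                 # EDL charging by the pulse
        if v > threshold:                # strong pulse: ion insertion
            doped += k_dope * v
        trace.append(edl + doped)
    return trace

weak = simulate_egt([0.5] * 5 + [0.0] * 50)    # sub-threshold: STP only
strong = simulate_egt([1.0] * 5 + [0.0] * 50)  # supra-threshold: STP + LTP
```

After the stimulus train ends, the sub-threshold trace relaxes back toward zero, whereas the supra-threshold trace retains the doping contribution, mirroring the short-term/long-term distinction described above.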
Protons, the smallest cations, exist in a wide range of electrolytes: inorganic or organic, solid or liquid. Owing to their small size and low mass, protons in electrolytes can be easily driven by an external electric field and inserted into the lattice interstices of the channel layer. In 2014, Wan et al. proposed an indium-zinc-oxide-based synaptic transistor gated by phosphorus-doped SiO2 nanogranular proton-conducting electrolyte films.[40] Synaptic plasticity functions, such as STDP (Fig.
Compared to inorganic materials, organic materials have many unique advantages, such as low-cost processing, easily tunable chemical structures, and large free volumes. In 2017, Salleo's group demonstrated a redox synaptic transistor that used PEDOT:PSS as the channel layer and Nafion as the electrolyte.[81] Moreover, the device was fabricated on a flexible PET substrate. This all-plastic device proved the potential of low-cost fabrication of transistor arrays, which might enable the integration of neuromorphic functionality into flexible large-area electronic systems. In 2019, Elliot et al. further combined the redox transistor with a conductive-bridge memory (CBM), naming the combination ionic floating-gate memory (IFG).[14] Figures
Many natural polymer materials are also good H+-conducting electrolytes and can be used in synaptic transistor design. Wan's group at Nanjing University has reported several achievements in this field. Interestingly, Wu et al. used natural chicken egg albumen, which has high proton conductivity, to prepare synaptic devices.[82] The device structure is shown in Fig.
Some heavier ions, such as Li+ and Ag+, have also been used as functional ions to modulate the channel layer's conductance. An early study was reported by Chen et al. in 2010,[84] in which the synaptic transistor had the Si n–p–n source–channel–drain structure of a conventional MOS transistor, as shown in Fig.
Oxygen ions can also serve as the functional ions in the electrolytes of high-performance synaptic transistors. The conductance of some oxides, such as SmNiO3, is very sensitive to the oxide's stoichiometric ratio.[74] The stoichiometry can be modulated in situ through an ionic-liquid electrolyte gate. The transport and uptake of oxygen ions in channel materials are the basis for implementing synaptic functions. Recently, SrFeOx (SFO) and SrCoOx (SCO) have been shown to be excellent channel materials for oxygen-ion synaptic transistors.[75,76,85] They have similar characteristics, so we discuss only SFO, whose crystal structure exhibits a variety of distinct oxygen-deficient perovskite (PV) phases, with x ranging from 2.5 to 3 depending on the oxygen stoichiometry. The brownmillerite (BM)-SrFeO2.5 structure, with alternating stacks of FeO4 tetrahedral and FeO6 octahedral layers, is an insulator, while the PV structure, with corner-sharing FeO6 octahedra, exhibits metallic conduction. Furthermore, the two phases can be electrically transformed into each other through electrolyte gating, as shown in Fig.
Although the above-mentioned EGTs can mimic various important synaptic functions, such as short-term synaptic plasticity, LTP, LTD, STDP, SRDP, etc., a reliable synaptic device for neuromorphic computing is still missing, as are fundamental analyses of device scalability, endurance, operating speed, and electrolyte stability. Moreover, a network-level demonstration with several such devices is urgently needed. Besides that, the effect of the mobile ion type on synaptic transistor performance is an issue of common interest. H+, the smallest ion, has the highest mobility and is easily inserted into the interstitial voids of materials. Thus, H+-EGTs are expected to have fast write speed and low programming energy. Li+-EGTs are similar to H+-EGTs, but with larger programming energy and more robust retention. O2–-EGTs, with excellent retention properties, differ from the previous two: they are usually slower because the O2– ion is much heavier than H+ and Li+.
Ferroelectric materials have a spontaneous electric polarization that can be reversed by an external electric field. Tremendous efforts have been devoted to ferroelectric devices.[87–90] In a ferroelectric-gated field-effect transistor (FeFET), the carrier density can be modulated by changing the polarization state of the ferroelectric material with the gate voltage, owing to the Coulomb interaction between the ferroelectric polarization and the carriers in the channel.[91] Traditional FeFETs use the two remanent polarization states of ferroelectric materials to realize the two digital states of a memory and have been intensively investigated.[89–94] Owing to the multi-domain polarization switching capability of ferroelectric materials, FeFETs can also exhibit multiple conductance levels and can thus be used to record the synaptic weights of artificial synapses.[44,90,95–99]
In 2012, Nishitani et al. fabricated an FGT that used Pb(Zr,Ti)O3 (PZT) as the gate insulator and the semiconductor ZnO as the channel layer.[38] As shown in Fig.
Because of the high temperatures required for their preparation, the fabrication of the above oxide ferroelectric materials is limited to only certain substrates. Given this situation, FGTs with organic ferroelectric gate materials, mainly P(VDF-TrFE), have attracted more attention.[44,95–98] Recently, Jang et al. fabricated an ultrathin artificial synapse featuring freestanding ferroelectric organic neuromorphic transistors (FONTs), which require neither a substrate nor an encapsulation layer.[44] The device structure is shown in Fig.
Although FGTs demonstrate some promising features, such as high stability, large on/off ratio, fast programming, and fewer variations in the weight-update curve,[107] they suffer from scaling problems similar to those of DRAM and floating-gate memories because, in essence, they are all charge-based memories.[16] Moreover, it is difficult to simulate short-term synaptic plasticity in these FGTs. Therefore, much effort is still needed to solve these problems.
Recent experiments have made great progress not only at the transistor device level, but also in simulating functionalities of the human nervous system. First, synaptic transistors have been used to construct tactile perception systems.[108,109] In 2018, Kim et al. built a tactile sensing system using flexible organic electronic devices.[110] As shown in Fig.
In addition to simulating tactile senses, synaptic transistors have also been successfully applied to simulating human hearing. Based on capacitively coupled multi-terminal oxide synaptic transistors, He et al. constructed a simple artificial neural network to simulate the sound-orientation detection of the human brain.[108] The system consists of two gate electrodes and two sets of source/drain terminals, regarded as pre-synapses (PREN1 and PREN2) and post-synapses (POSTN1 and POSTN2), respectively (Fig.
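The underlying cue for such sound-orientation detection is the interaural time difference (ITD) between the spikes arriving at the two pre-synaptic inputs. Under a simple plane-wave assumption (ITD = d sinθ / c, with an assumed ear separation d), the azimuth can be recovered as follows; the geometry and parameter values are simplifying assumptions, not the circuit used in the cited work:

```python
import math

def azimuth_from_itd(itd, ear_distance=0.2, speed_of_sound=343.0):
    """Plane-wave model: ITD = d*sin(theta)/c, so theta = asin(ITD*c/d).

    itd is the interaural time difference in seconds (positive when the
    sound reaches the first input earlier); ear_distance (m) and the
    plane-wave model itself are illustrative assumptions. Returns the
    azimuth in degrees, positive toward the leading side.
    """
    s = itd * speed_of_sound / ear_distance
    s = max(-1.0, min(1.0, s))  # clamp against the model's physical limit
    return math.degrees(math.asin(s))
```

In the transistor network, this mapping is realized implicitly: the spike-timing difference at the two gates modulates the relative post-synaptic currents, from which the direction is inferred.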
For real-world applications, it is highly desirable to study neuromorphic devices driven by non-electrical signals (such as photonic, pressure, or chemical signals) to avoid the problems caused by signal transduction.[112–114] Recently, Yu et al. designed a novel optoelectronic neuromorphic device based on a PN-junction-decorated oxide synaptic transistor and demonstrated an ocular simulation.[115] The system consists of four parts: a CMOS photon sensor, a signal processing unit, an electronic memristive synapse (Fig.
Apart from the technologies mentioned above, many other significant achievements have been made in artificial sensory systems and neuromorphic computation based on synaptic transistors, such as pH detection,[116] spiking humidity detection,[117] and neuronal arithmetic.[118] These studies lay the foundations for the application of synaptic transistors in neural computing and artificial intelligence.
In light of its great advantages of high efficiency, low power consumption, and self-learning ability, neuromorphic computing can be the cornerstone of future artificial intelligence. As the basic units of neuromorphic computing hardware, synaptic devices have been studied extensively. To give an overview of this field, we have summarized synaptic plasticity, learning rules, and recent advances in high-performance synaptic transistors capable of simulating biological synapses. Depending on the working mechanism of the dielectric layer, synaptic transistors are divided into electrolyte-gated and ferroelectric-gated types. In the former, electrostatic and electrochemical effects occur at the electric double layer formed at the interface between the semiconducting channel layer and the electrolyte. The synaptic weight can then be modulated via the doping or de-doping of active ions, which can be cations (such as H+ and Li+) or anions (such as O2– and S2–). In FGTs, the channel conductance is altered by the density of electrons induced by the polarization of the ferroelectric film, which can be controlled by the gate voltage in a non-volatile manner. Both inorganic and organic ferroelectric-layer-based synaptic transistors are reviewed in this article. We hope this article inspires researchers to explore new materials and structural designs to implement better synaptic transistors.
Despite the great advances in synaptic transistors, many challenges remain to be overcome before they can be generally applied in neuromorphic computing. First, the performance of current synaptic transistors is not yet good enough. An ideal synaptic transistor should have: 1) a small size for integration (< 1 μm²); 2) a large number of states (∼100); 3) linear and symmetric conductance tuning; 4) low switching noise (< 0.5% of the weight range); 5) low switching energy consumption (< 1 pJ per switching event); 6) fast write/read speed (< 1 μs); 7) long state retention time (10³–10⁸ s); and 8) excellent write endurance (∼10⁹ cycles).[15] Yet none of the reported works satisfy all these requirements. Second, the processing technology used in most previous works is not compatible with standard microelectronic technology. Moreover, it is hard to realize the 3D integration of synaptic transistors, and their interconnection is also a great challenge. Therefore, the use of synaptic transistors for neuromorphic computing still has a long way to go.
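For concreteness, the target metrics listed above can be collected into a simple benchmark check; the device numbers below are hypothetical, made up purely to illustrate how far a typical report may fall short of the full requirement set:

```python
# Targets for an ideal synaptic transistor, per the list above.
# Each entry maps a metric name to a pass/fail predicate.
IDEAL = {
    "area_um2":      lambda v: v < 1.0,    # 1) size < 1 um^2
    "num_states":    lambda v: v >= 100,   # 2) ~100 conductance states
    "noise_pct":     lambda v: v < 0.5,    # 4) noise < 0.5% of range
    "energy_pJ":     lambda v: v < 1.0,    # 5) < 1 pJ per switching event
    "write_time_us": lambda v: v < 1.0,    # 6) write/read < 1 us
    "retention_s":   lambda v: v >= 1e3,   # 7) retention >= 10^3 s
    "endurance":     lambda v: v >= 1e9,   # 8) ~10^9 write cycles
}

# Hypothetical reported device (all numbers invented for illustration)
device = {"area_um2": 4.0, "num_states": 64, "noise_pct": 1.0,
          "energy_pJ": 10.0, "write_time_us": 0.5,
          "retention_s": 1e4, "endurance": 1e6}

unmet = [name for name, ok in IDEAL.items() if not ok(device[name])]
```

Even this fictitious device, which meets the speed and retention targets, fails five of the seven quantitative criteria, illustrating the gap between current reports and the ideal specification.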
[1]
[2]
[3]
[4]
[5]
[6]
[7]
[8]
[9]
[10]
[11]
[12]
[13]
[14]
[15]
[16]
[17]
[18]
[19]
[20]
[21]
[22]
[23]
[24]
[25]
[26]
[27]
[28]
[29]
[30]
[31]
[32]
[33]
[34]
[35]
[36]
[37]
[38]
[39]
[40]
[41]
[42]
[43]
[44]
[45]
[46]
[47]
[48]
[49]
[50]
[51]
[52]
[53]
[54]
[55]
[56]
[57]
[58]
[59]
[60]
[61]
[62]
[63]
[64]
[65]
[66]
[67]
[68]
[69]
[70]
[71]
[72]
[73]
[74]
[75]
[76]
[77]
[78]
[79]
[80]
[81]
[82]
[83]
[84]
[85]
[86]
[87]
[88]
[89]
[90]
[91]
[92]
[93]
[94]
[95]
[96]
[97]
[98]
[99]
[100]
[101]
[102]
[103]
[104]
[105]
[106]
[107]
[108]
[109]
[110]
[111]
[112]
[113]
[114]
[115]
[116]
[117]
[118]